Scalable Graph Neural Network Training

Authors

Abstract

Graph Neural Networks (GNNs) are a new and increasingly popular family of deep neural network architectures to perform learning on graphs. Training them efficiently is challenging due to the irregular nature of graph data. The problem becomes even more challenging when scaling to large graphs that exceed the capacity of single devices. Standard approaches to distributed DNN training, such as data and model parallelism, do not directly apply to GNNs. Instead, two different approaches have emerged in the literature: whole-graph and sample-based training. In this paper, we review and compare the two approaches. Scalability is challenging with both approaches, but we make the case that research should focus on sample-based training, since it is the more promising approach. Finally, we review recent systems supporting sample-based training.
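To make the distinction concrete, below is a minimal, self-contained sketch of the sample-based approach (a toy illustration, not any particular system's implementation): for each mini-batch of target nodes, a fixed number of neighbors is sampled per layer, so the size of the computation graph stays bounded even when the full graph does not fit on a single device. All names here (`sample_neighbors`, `sample_blocks`, the fanout values) are illustrative.

```python
# Toy sketch of sample-based (mini-batch) GNN training via neighbor sampling.
import random

def sample_neighbors(adj, nodes, fanout):
    """Sample up to `fanout` neighbors for each node in `nodes`."""
    block = {}
    for v in nodes:
        neigh = adj.get(v, [])
        block[v] = random.sample(neigh, min(fanout, len(neigh)))
    return block

def sample_blocks(adj, seeds, fanouts):
    """Build one sampled block per GNN layer, innermost layer first."""
    blocks, frontier = [], list(seeds)
    for fanout in reversed(fanouts):
        block = sample_neighbors(adj, frontier, fanout)
        frontier = list({u for vs in block.values() for u in vs} | set(frontier))
        blocks.insert(0, block)
    return blocks

# Toy graph as an adjacency list; a 2-layer GNN with fanout 2 per layer.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
for step in range(3):                    # a few mini-batch steps
    seeds = random.sample(list(adj), 2)  # mini-batch of target nodes
    blocks = sample_blocks(adj, seeds, fanouts=[2, 2])
    # A real trainer would now gather features for the sampled nodes,
    # run forward/backward on the blocks, and update the weights.
    print(step, seeds, blocks)
```

In contrast, whole-graph training would run the forward and backward pass over the entire graph at once, which is what forces the partitioning and communication machinery this paper surveys.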


Similar Articles

Learning Graph While Training: An Evolving Graph Convolutional Neural Network

Convolutional Neural Networks on graphs are an important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications they are highly diverse or even not well defined. Under some circumstances, e.g. chemical molecular data, clustering or coarsening for simplifying the graphs is h...
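As a generic illustration of the coarsening idea this abstract mentions (my sketch, not this paper's method), the snippet below collapses each cluster of nodes into a super-node and sums the edge weights between clusters; the `coarsen` helper and the cluster assignment are hypothetical.

```python
# Coarsen a weighted graph by merging nodes according to a clustering.
import numpy as np

def coarsen(adj_matrix, cluster_of):
    """Coarsen a weighted adjacency matrix given a node -> cluster map."""
    n = adj_matrix.shape[0]
    k = max(cluster_of) + 1
    # Assignment matrix P: P[i, c] = 1 if node i belongs to cluster c.
    P = np.zeros((n, k))
    P[np.arange(n), cluster_of] = 1.0
    return P.T @ adj_matrix @ P  # k x k adjacency over super-nodes

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(coarsen(A, cluster_of=[0, 0, 1, 1]))
```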


Graph Based Convolutional Neural Network

In this paper we present a method for the application of Convolutional Neural Network (CNN) operators for use in domains which exhibit irregular spatial geometry by use of the spectral domain of a graph Laplacian (Figure 1). This allows learning of localized features in irregular domains by defining neighborhood relationships as edge weights between vertices in a graph G. By formulating the domain...
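The spectral construction this abstract describes can be sketched in a few lines (a generic illustration of Laplacian-based spectral filtering, not this paper's exact operator): a graph signal is transformed into the eigenbasis of the graph Laplacian, its spectral coefficients are scaled by a filter, and the result is transformed back. The function name `spectral_filter` and the low-pass filter choice are assumptions.

```python
# Spectral filtering of a graph signal via the Laplacian eigenbasis.
import numpy as np

def spectral_filter(A, x, g):
    """Apply spectral filter g(lambda) to signal x on a graph with adjacency A."""
    D = np.diag(A.sum(axis=1))
    L = D - A                    # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)   # eigendecomposition (L is symmetric)
    return U @ (g(lam) * (U.T @ x))

# Path graph on 4 nodes; a low-pass filter that damps high frequencies.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, -1.0, 1.0, -1.0])  # a "high-frequency" signal
print(spectral_filter(A, x, g=lambda lam: 1.0 / (1.0 + lam)))
```

Learnable spectral CNNs follow the same pattern, replacing the fixed filter g with trainable parameters over the eigenvalues.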


Tensor graph convolutional neural network

In this paper, we propose a novel tensor graph convolutional neural network (TGCNN) to conduct convolution on factorizable graphs, for which two types of problems are focused on here: one is sequential dynamic graphs and the other is cross-attribute graphs. In particular, we propose a graph preserving layer to memorize salient nodes of those factorized subgraphs, i.e. cross graph convolution and grap...


Incremental Convolutional Neural Network Training

Experimenting with novel ideas on deep convolutional neural networks (DCNNs) using big datasets is hampered by the fact that network training requires huge computational resources in terms of CPU and GPU power and hours. One option is to downscale the problem, e.g., fewer classes and fewer samples, but this is undesirable with DCNNs, whose performance is largely data-dependent. In this work, we take...


Accelerating Recurrent Neural Network Training

An efficient algorithm for recurrent neural network training is presented. The approach increases the training speed for tasks where the length of the input sequence may vary significantly. The proposed approach is based on optimal batch bucketing by input sequence length and data parallelization on multiple graphics processing units. The baseline training performance without sequence bucket...
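The bucketing idea can be sketched as follows (a simplified illustration, not the paper's optimal bucketing algorithm): grouping sequences of similar length into the same batch keeps per-batch padding small, which is where the speedup on variable-length inputs comes from. `bucket_batches` and the toy lengths are illustrative.

```python
# Group sequences of similar length into batches to minimize padding.
import random

def bucket_batches(sequences, batch_size):
    """Return batches of indices, sorted by sequence length."""
    order = sorted(range(len(sequences)), key=lambda i: len(sequences[i]))
    batches = [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
    random.shuffle(batches)  # shuffle batches, not items, each epoch
    return batches

seqs = [[0] * n for n in (3, 17, 5, 16, 4, 18)]
for batch in bucket_batches(seqs, batch_size=2):
    lengths = [len(seqs[i]) for i in batch]
    padded_to = max(lengths)  # per-batch padding, far below the global max
    print(batch, lengths, padded_to)
```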



Journal

Journal title: Operating Systems Review

Year: 2021

ISSN: 0163-5980, 1943-586X

DOI: https://doi.org/10.1145/3469379.3469387